
    On the computation and updating of the modified Cholesky decomposition of a covariance matrix

    Methods are described for obtaining and updating the modified Cholesky decomposition (MCD) for the particular case of a covariance matrix when one is given only the original data. The three methods are: the standard method of forming the covariance matrix K and then solving for the MCD factors L and D (where K = LDL^T); a method based on Householder reflections; and a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior choice despite being the slowest, since (1) the relative amount of time spent computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed, and FORTRAN programs implementing the algorithms are listed.
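
    As a minimal illustration of the standard method, the following Python/NumPy sketch (an assumed translation; the paper's listings are in FORTRAN) forms the covariance matrix K from the data and then solves for the unit lower-triangular factor L and the diagonal D of K = LDL^T:

        import numpy as np

        def mcd(K):
            # K = L D L^T with unit lower-triangular L and diagonal D
            n = K.shape[0]
            L, d = np.eye(n), np.zeros(n)
            for j in range(n):
                d[j] = K[j, j] - np.dot(L[j, :j] ** 2, d[:j])
                for i in range(j + 1, n):
                    L[i, j] = (K[i, j] - np.dot(L[i, :j] * L[j, :j], d[:j])) / d[j]
            return L, d

        X = np.random.default_rng(0).normal(size=(100, 4))  # rows = observations
        K = np.cov(X, rowvar=False)                         # standard method: form K first
        L, d = mcd(K)
        assert np.allclose(L @ np.diag(d) @ L.T, K)

    Forming K explicitly squares the condition number of the underlying least-squares problem, which is consistent with the stability ranking above: methods that work from the data matrix directly, such as the Householder approach, tend to be better behaved numerically.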

    The recursive maximum likelihood proportion estimator: User's guide and test results

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to the programs as they currently exist on the IBM 360/67 at LARS, Purdue University, is included, and test results on LANDSAT data are reported. On Hill County data, the algorithm yields results comparable to those of the standard maximum likelihood proportion estimator.

    A quasi-Newton approach to optimization problems with probability density constraints

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and to sum to one. The nonnegativity constraints are eliminated by working with the squares of the variables, and the resulting equality-constrained problem is solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
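
    The substitution is easy to illustrate: writing p_i = y_i^2 makes nonnegativity automatic and turns the simplex constraint sum p_i = 1 into the single equality sum y_i^2 = 1. A minimal Python sketch follows; the example objective and the use of SciPy's SLSQP solver are assumptions standing in for the paper's quasi-Newton method and FORTRAN program:

        import numpy as np
        from scipy.optimize import minimize

        def f(p):                       # assumed example objective over proportions p
            return np.sum((p - np.array([0.5, 0.3, 0.2])) ** 2)

        g = lambda y: f(y ** 2)         # eliminate p >= 0 via the substitution p_i = y_i^2
        con = {"type": "eq", "fun": lambda y: np.sum(y ** 2) - 1.0}
        y0 = np.sqrt(np.full(3, 1.0 / 3.0))
        res = minimize(g, y0, method="SLSQP", constraints=[con])
        p = res.x ** 2
        print(p, p.sum())               # nonnegative and sums to one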

    An Analysis of Applications Development Systems for Remotely Sensed, Multispectral Data for the Earth Observations Division of the NASA Lyndon B. Johnson Space Center

    Applications development systems (ADS) for remotely sensed, multispectral data are examined for the Earth Observations Division (EOD) at the NASA Lyndon B. Johnson Space Center. Design goals are detailed, along with the design objectives an ideal system should satisfy. The design objectives were ranked according to the priorities of EOD's program objectives. Four systems available to EOD were then measured against the ideal ADS by rating each system on each design objective. Using the established priorities, it was determined how well each system served as an ADS. Recommendations are made as to possible courses of action EOD could pursue to obtain a more efficient ADS.

    The use of the Winograd matrix multiplication algorithm in digital multispectral processing

    The Winograd procedure for matrix multiplication provides a method whereby general matrix products may be computed more efficiently than by the conventional method. The algorithm and the time savings it can effect are described, and a FORTRAN program is provided which performs a general matrix multiplication according to the algorithm. A variation of the procedure that may be used to calculate Gaussian probability density functions is also described, and it is shown how a time savings can be effected in that calculation. Extending the method to other similar calculations should yield comparable savings.
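
    A sketch of the idea, in Python rather than the paper's FORTRAN: Winograd's formulation rests on the identity (a1 + b2)(a2 + b1) = a1 b1 + a2 b2 + a1 a2 + b1 b2, so the products involving only A's rows and only B's columns can be precomputed once and subtracted, leaving roughly half the multiplications per inner product.

        import numpy as np

        def winograd_matmul(A, B):
            m, n = A.shape
            _, p = B.shape
            assert n % 2 == 0           # an odd inner dimension needs one correction term
            xi = np.einsum("ij,ij->i", A[:, 0::2], A[:, 1::2])   # per-row precomputation
            eta = np.einsum("ij,ij->j", B[0::2, :], B[1::2, :])  # per-column precomputation
            C = np.empty((m, p))
            for i in range(m):
                for k in range(p):
                    # n/2 multiplications per inner product instead of n
                    C[i, k] = np.dot(A[i, 0::2] + B[1::2, k],
                                     A[i, 1::2] + B[0::2, k]) - xi[i] - eta[k]
            return C

        A = np.arange(12.0).reshape(3, 4)
        B = np.arange(8.0).reshape(4, 2)
        assert np.allclose(winograd_matmul(A, B), A @ B)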

    Classification improvement by optimal dimensionality reduction when training sets are of small size

    A computer simulation was performed to test the conjecture that, when training sets are small, classification in a subspace of the original data space may give a smaller probability of error than classification in the full data space. The reasoning is that the gain in accuracy of the estimated likelihood functions used for classification in the lower-dimensional space (subspace) offsets the loss of information associated with dimensionality reduction (feature extraction). A number of pseudorandom training and data vectors were generated from two four-dimensional Gaussian classes, and a special algorithm was used to construct an optimal one-dimensional feature space onto which the data were projected. When the training sets are small, classification in the optimal one-dimensional space is found to yield lower error rates than classification in the original four-dimensional space.
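
    The experiment is straightforward to reproduce in outline. The Python sketch below substitutes the Fisher discriminant direction for the paper's special projection algorithm, and the class parameters and sample sizes are assumptions; it compares plug-in Gaussian classification in the original four-dimensional space against the projected one-dimensional space:

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)
        d, n_train, n_test = 4, 8, 2000          # deliberately small training sets
        mu = [np.zeros(d), 0.8 * np.ones(d)]
        cov = [np.eye(d), np.eye(d)]

        def gaussian_classify(train0, train1, test):
            # plug-in Gaussian classifier with parameters estimated from training data
            dens = []
            for tr in (train0, train1):
                m = tr.mean(axis=0)
                S = np.atleast_2d(np.cov(tr, rowvar=False))
                dens.append(multivariate_normal(m, S, allow_singular=True).logpdf(test))
            return (dens[1] > dens[0]).astype(int)

        err4, err1 = [], []
        for _ in range(200):
            tr = [rng.multivariate_normal(mu[c], cov[c], n_train) for c in (0, 1)]
            te = [rng.multivariate_normal(mu[c], cov[c], n_test) for c in (0, 1)]
            X, y = np.vstack(te), np.r_[np.zeros(n_test), np.ones(n_test)]
            err4.append(np.mean(gaussian_classify(tr[0], tr[1], X) != y))
            # 1-D feature: Fisher direction (stand-in for the paper's algorithm)
            Sw = np.cov(tr[0], rowvar=False) + np.cov(tr[1], rowvar=False)
            w = np.linalg.solve(Sw, tr[1].mean(0) - tr[0].mean(0))[:, None]
            err1.append(np.mean(gaussian_classify(tr[0] @ w, tr[1] @ w, X @ w) != y))
        print(f"4-D error: {np.mean(err4):.3f}   1-D error: {np.mean(err1):.3f}")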

    An algorithm for optimal single linear feature extraction from several Gaussian pattern classes

    A computational algorithm is presented for extracting an optimal single linear feature from several Gaussian pattern classes. The algorithm minimizes the increase in the probability of misclassification in the transformed (feature) space. Numerical results are presented for remotely sensed data from the Purdue C1 flight line as well as LANDSAT data. Classification using the optimal single linear feature yielded a probability of misclassification on the order of 30% lower than that obtained with the best single untransformed feature, and the optimal single linear feature gave performance comparable to that obtained using the two features which maximized the average divergence.
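
    In outline, the computation can be mimicked by parameterizing the feature as a unit vector w, noting that each Gaussian class projects to the one-dimensional Gaussian N(w'mu_i, w'Sigma_i w), and minimizing an estimate of the misclassification probability over w. The Python sketch below is only a stand-in for the paper's algorithm; the class parameters, equal priors, Monte Carlo error estimate, and derivative-free optimizer are all assumptions:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        mus = [np.zeros(4), np.r_[1.5, 0.0, 0.0, 0.0], np.r_[0.0, 1.2, 0.5, 0.0]]
        covs = [np.eye(4), 1.5 * np.eye(4), np.diag([1.0, 2.0, 1.0, 1.0])]
        samples = [rng.multivariate_normal(m, S, 2000) for m, S in zip(mus, covs)]

        def error_rate(w):
            w = w / np.linalg.norm(w)
            locs = [w @ m for m in mus]                  # class i projects to the
            scales = [np.sqrt(w @ S @ w) for S in covs]  # 1-D Gaussian N(w'mu_i, w'Sigma_i w)
            err = 0.0
            for c, X in enumerate(samples):
                z = X @ w
                dens = np.column_stack([norm.pdf(z, locs[i], scales[i]) for i in range(3)])
                err += np.mean(dens.argmax(axis=1) != c)
            return err / 3.0                             # equal priors assumed

        res = minimize(error_rate, np.ones(4), method="Nelder-Mead")
        w_opt = res.x / np.linalg.norm(res.x)
        print("feature direction:", w_opt, " error:", error_rate(w_opt))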